
    A New Lower Bound on the Maximum Number of Satisfied Clauses in Max-SAT and its Algorithmic Applications

    A pair of unit clauses is called conflicting if it is of the form (x), (x̄). A CNF formula is unit-conflict free (UCF) if it contains no pair of conflicting unit clauses. Lieberherr and Specker (J. ACM 28, 1981) showed that for each UCF CNF formula with m clauses we can simultaneously satisfy at least φm clauses, where φ = (√5 − 1)/2. We improve the Lieberherr-Specker bound by showing that for each UCF CNF formula F with m clauses we can find, in polynomial time, a subformula F' with m' clauses such that we can simultaneously satisfy at least φm + (1 − φ)m' + (2 − 3φ)n''/2 clauses (in F), where n'' is the number of variables in F which are not in F'. We consider two parameterized versions of MAX-SAT, where the parameter is the number of satisfied clauses above the bounds m/2 and m(√5 − 1)/2. The former bound is tight for general formulas and the latter is tight for UCF formulas. Mahajan and Raman (J. Algorithms 31, 1999) showed that every instance of the first parameterized problem can be transformed, in polynomial time, into an equivalent one with at most 6k + 3 variables and 10k clauses. We improve this to 4k variables and (2√5 + 4)k clauses. Mahajan and Raman conjectured that the second parameterized problem is fixed-parameter tractable (FPT). We show that the problem is indeed FPT by describing a polynomial-time algorithm that transforms any problem instance into an equivalent one with at most (7 + 3√5)k variables. Our results are obtained using our improvement of the Lieberherr-Specker bound above.
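
    The improved bound is easy to evaluate for a concrete formula. Below is a minimal sketch (the function name and example numbers are ours, not the paper's) that computes the guaranteed number of simultaneously satisfiable clauses from m, m', and n'':

    ```python
    from math import sqrt

    PHI = (sqrt(5) - 1) / 2  # the Lieberherr-Specker constant, about 0.618

    def improved_guarantee(m: int, m_prime: int, n_dblprime: int) -> float:
        """Clauses of F guaranteed simultaneously satisfiable:
        phi*m + (1 - phi)*m' + (2 - 3*phi)*n''/2, where m' is the number
        of clauses of the subformula F' and n'' the number of variables
        of F outside F'."""
        return PHI * m + (1 - PHI) * m_prime + (2 - 3 * PHI) * n_dblprime / 2

    # Example: m = 100 clauses, a subformula with m' = 20 clauses, n'' = 10.
    print(improved_guarantee(100, 20, 10))  # ~70.17, vs. phi*m ~ 61.80
    ```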

    Fast branching algorithm for Cluster Vertex Deletion

    In the family of clustering problems, we are given a set of objects (vertices of the graph) together with some observed pairwise similarities (edges). The goal is to identify clusters of similar objects by slightly modifying the graph to obtain a cluster graph (a disjoint union of cliques). Hueffner et al. [Theory Comput. Syst. 2010] initiated the parameterized study of Cluster Vertex Deletion, where the allowed modification is vertex deletion, and presented an elegant O(2^k * k^9 + n * m)-time fixed-parameter algorithm, parameterized by the solution size. In our work, we pick up this line of research and present an O(1.9102^k * (n + m))-time branching algorithm.
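
    For intuition: cluster graphs are exactly the graphs with no induced path on three vertices (P3), so the simplest branching algorithm destroys one induced P3 per step by deleting one of its three vertices. The sketch below implements that naive O(3^k)-branching baseline, not the refined case analysis behind the 1.9102^k bound:

    ```python
    import itertools

    def find_p3(adj):
        """Return an induced path (u, v, w) with edges u-v and v-w but no
        u-w edge, or None if the graph is already a cluster graph."""
        for v in adj:
            for u, w in itertools.combinations(adj[v], 2):
                if w not in adj[u]:
                    return u, v, w
        return None

    def cluster_vd(adj, k):
        """Naive 3^k branching: True iff deleting <= k vertices of the
        graph {vertex: set of neighbours} yields a cluster graph."""
        p3 = find_p3(adj)
        if p3 is None:
            return True
        if k == 0:
            return False
        for x in p3:  # some vertex of every induced P3 must be deleted
            sub = {v: adj[v] - {x} for v in adj if v != x}
            if cluster_vd(sub, k - 1):
                return True
        return False

    # A path on 3 vertices is itself an induced P3:
    adj = {1: {2}, 2: {1, 3}, 3: {2}}
    print(cluster_vd(adj, 0), cluster_vd(adj, 1))  # False True
    ```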

    Fixed-Parameter Algorithms in Analysis of Heuristics for Extracting Networks in Linear Programs

    We consider the problem of extracting a maximum-size reflected network in a linear program. This problem has been studied before, and a state-of-the-art SGA heuristic with two variations has been proposed. In this paper we apply a new approach to evaluate the quality of SGA. In particular, we solve the majority of the instances in the testbed to optimality using a new fixed-parameter algorithm, i.e., an algorithm whose runtime is polynomial in the input size but exponential in terms of an additional parameter associated with the given problem. This analysis allows us to conclude that the existing SGA heuristic in fact produces solutions of very high quality and often reaches the optimal objective values. However, SGA contains two components which leave some room for improvement: building a spanning tree and searching for an independent set in a graph. In the hope of obtaining an even better heuristic, we tried to replace both of these components with equivalent algorithms. Using a fixed-parameter algorithm instead of a greedy one to find an independent set improved the whole heuristic only insignificantly, even though it solves that subproblem exactly. Hence, the crucial part of SGA is the construction of the spanning tree. We tried three different algorithms, and depth-first search proved clearly superior to the other two for building the spanning tree in SGA. Thereby, by applying fixed-parameter algorithms, we verified that the existing SGA heuristic is of high quality and identified the component that required improvement. This allowed us to focus the research in the proper direction, which yielded a superior variation of SGA.
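
    The abstract singles out spanning-tree construction by depth-first search as the crucial component. As an illustration of just that step (a generic DFS tree builder of our own, not the SGA code itself):

    ```python
    def dfs_spanning_tree(adj, root):
        """Edge set of a depth-first spanning tree of root's component,
        for a graph given as {vertex: set of neighbours}."""
        tree, visited = [], {root}
        def dfs(v):
            for u in adj[v]:
                if u not in visited:
                    visited.add(u)
                    tree.append((v, u))
                    dfs(u)  # descend before trying v's other neighbours
        dfs(root)
        return tree

    adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
    print(dfs_spanning_tree(adj, 1))  # e.g. [(1, 2), (2, 3), (3, 4)]
    ```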

    Vertex Cover Kernelization Revisited: Upper and Lower Bounds for a Refined Parameter

    An important result in the study of polynomial-time preprocessing shows that there is an algorithm which, given an instance (G,k) of Vertex Cover, outputs an equivalent instance (G',k') in polynomial time with the guarantee that G' has at most 2k' vertices (and thus O((k')^2) edges) with k' <= k. Using the terminology of parameterized complexity, we say that k-Vertex Cover has a kernel with 2k vertices. There is complexity-theoretic evidence that both 2k vertices and Theta(k^2) edges are optimal for the kernel size. In this paper we consider the Vertex Cover problem with a different parameter, the size fvs(G) of a minimum feedback vertex set for G. This refined parameter is structurally smaller than the parameter k associated to the vertex covering number vc(G), since fvs(G) <= vc(G) and the difference can be arbitrarily large. We give a kernel for Vertex Cover with a number of vertices that is cubic in fvs(G): an instance (G,X,k) of Vertex Cover, where X is a feedback vertex set for G, can be transformed in polynomial time into an equivalent instance (G',X',k') such that |X'| <= 2|X| and |V(G')| <= O(|X'|^3). A similar result holds when the feedback vertex set X is not given along with the input. In sharp contrast, we show that the Weighted Vertex Cover problem does not have a polynomial kernel when parameterized by the cardinality of a given vertex cover of the graph unless NP is in coNP/poly and the polynomial hierarchy collapses to the third level. (Comment: published in "Theory of Computing Systems" as an Open Access publication.)
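
    As a concrete illustration of what a Vertex Cover kernelization does, here is the classical Buss kernel for the standard parameter k (a much simpler rule than either the 2k-vertex kernel or the cubic-in-fvs(G) kernel discussed above): any vertex of degree greater than k must belong to every cover of size at most k.

    ```python
    def buss_kernel(edges, k):
        """Classic Buss kernelization for k-Vertex Cover on an edge list.
        Returns an equivalent instance (edges', k'), or None for a
        detected no-instance."""
        while True:
            deg = {}
            for u, v in edges:
                deg[u] = deg.get(u, 0) + 1
                deg[v] = deg.get(v, 0) + 1
            high = [v for v, d in deg.items() if d > k]
            if not high:
                break
            x = high[0]                        # x is in every small cover
            edges = [e for e in edges if x not in e]
            k -= 1
            if k < 0:
                return None
        # A yes-instance now has <= k^2 edges: <= k cover vertices,
        # each of degree <= k.
        if len(edges) > k * k:
            return None
        return edges, k

    print(buss_kernel([(1, 2), (1, 3), (1, 4), (2, 3)], 2))  # ([(2, 3)], 1)
    ```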

    The maximum clique enumeration problem: algorithms, applications, and implementations

    Background: The maximum clique enumeration (MCE) problem asks that we identify all maximum cliques in a finite, simple graph. MCE is closely related to two other well-known and widely-studied problems: the maximum clique optimization problem, which asks us to determine the size of a largest clique, and the maximal clique enumeration problem, which asks that we compile a listing of all maximal cliques. Naturally, these three problems are NP-hard, given that they subsume the classic version of the NP-complete clique decision problem. MCE can be solved in principle with standard enumeration methods due to Bron, Kerbosch, Kose and others. Unfortunately, these techniques are ill-suited to graphs encountered in our applications. We must solve MCE on instances deeply rooted in data mining and computational biology, where high-throughput data capture often creates graphs of extreme size and density. MCE can also be solved in principle using more modern algorithms based in part on vertex cover and the theory of fixed-parameter tractability (FPT). While FPT is an improvement, these algorithms too can fail to scale sufficiently well as the sizes and densities of our datasets grow.
    Results: An extensive testbed of benchmark graphs is created using publicly available transcriptomic datasets from the Gene Expression Omnibus (GEO). Empirical testing reveals crucial but latent features of such high-throughput biological data. In turn, it is shown that these features distinguish real data from random data intended to reproduce salient topological features. In particular, with real data there tends to be an unusually high degree of maximum clique overlap. Armed with this knowledge, novel decomposition strategies are tuned to the data and coupled with the best FPT MCE implementations.
    Conclusions: Several algorithmic improvements to MCE are made which progressively decrease the run time on graphs in the testbed. Frequently the final runtime improvement is several orders of magnitude. As a result, instances which were once prohibitively time-consuming to solve are brought into the domain of realistic feasibility.
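
    The standard enumeration method mentioned above is the Bron-Kerbosch algorithm; a textbook sketch with pivoting follows (the generic algorithm, not the tuned FPT implementations from the paper). Maximum cliques are then obtained by keeping only the maximal cliques of largest size:

    ```python
    def bron_kerbosch(adj):
        """Yield all maximal cliques of a graph {vertex: set of neighbours}."""
        def expand(r, p, x):
            if not p and not x:
                yield set(r)          # r can no longer be extended
                return
            # choose a pivot with many neighbours in p to prune branches
            pivot = max(p | x, key=lambda v: len(adj[v] & p))
            for v in list(p - adj[pivot]):
                yield from expand(r | {v}, p & adj[v], x & adj[v])
                p.remove(v)
                x.add(v)
        yield from expand(set(), set(adj), set())

    adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
    cliques = list(bron_kerbosch(adj))
    best = max(len(c) for c in cliques)
    print([c for c in cliques if len(c) == best])  # maximum cliques: [{1, 2, 3}]
    ```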

    Towards Optimal and Expressive Kernelization for d-Hitting Set

    d-Hitting Set is the NP-hard problem of selecting at most k vertices of a hypergraph so that each hyperedge, all of which have cardinality at most d, contains at least one selected vertex. Applications of d-Hitting Set include, for example, fault diagnosis, automatic program verification, and the noise-minimizing assignment of frequencies to radio transmitters. We give a linear-time algorithm that transforms an instance of d-Hitting Set into an equivalent instance comprising at most O(k^d) hyperedges and vertices. In terms of parameterized complexity, this is a problem kernel. Our kernelization algorithm is based on speeding up the well-known approach of finding and shrinking sunflowers in hypergraphs, which yields problem kernels with structural properties that we condense into the concept of expressive kernelization. We conduct experiments to show that our kernelization algorithm can kernelize instances with more than 10^7 hyperedges in less than five minutes. Finally, we show that the number of vertices in the problem kernel can be further reduced to O(k^{d-1}) with additional O(k^{1.5d}) processing time by nontrivially combining the sunflower technique with d-Hitting Set problem kernels due to Abu-Khzam and Moser. (Comment: this version gives corrected experimental results, adds additional figures, and more formally defines "expressive kernelization".)
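
    A sketch of the underlying sunflower rule (a naive quadratic search of our own, without the paper's linear-time speed-up): if k+1 hyperedges contain a common core C and are pairwise disjoint outside C, any hitting set of size at most k must hit C, so the whole sunflower may be replaced by the single hyperedge C; an empty core certifies a no-instance.

    ```python
    from itertools import combinations

    def find_sunflower(edges, t):
        """Greedily look for a sunflower with t petals: hyperedges that
        all contain a core C and are pairwise disjoint outside C.
        Returns (core, petal_edges) or None."""
        for e in edges:
            for r in range(len(e) + 1):
                for core in map(frozenset, combinations(sorted(e), r)):
                    petals, used = [], set()
                    for f in edges:
                        if core <= f and not ((f - core) & used):
                            petals.append(f)
                            used |= f - core
                    if len(petals) >= t:
                        return core, petals[:t]
        return None

    def sunflower_reduce(edges, k):
        """Apply the sunflower rule exhaustively for parameter k.
        Returns the reduced edge set, or None if no size-k hitting
        set can exist."""
        edges = {frozenset(e) for e in edges}
        while True:
            found = find_sunflower(edges, k + 1)
            if found is None:
                return edges
            core, petals = found
            if not core:   # k+1 pairwise disjoint edges need k+1 vertices
                return None
            edges = (edges - set(petals)) | {core}

    # Three edges sharing only vertex 1 form a sunflower with core {1}:
    print(sunflower_reduce([{1, 2}, {1, 3}, {1, 4}], 1))  # {frozenset({1})}
    ```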

    Algorithms and experiments for clique relaxations—finding maximum s-plexes

    We propose new practical algorithms to find degree-relaxed variants of cliques called s-plexes. An s-plex denotes a vertex subset in a graph inducing a subgraph where every vertex has edges to all but at most s vertices in the s-plex. Cliques are 1-plexes. In analogy to the special case of finding maximum-cardinality cliques, finding maximum-cardinality s-plexes is NP-hard. Complementing previous work, we develop combinatorial, exact algorithms which are strongly based on methods from parameterized algorithmics. The experiments with our freely available implementation indicate the competitiveness of our approach, which for many real-world graphs outperforms the previously used methods.
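
    The defining property is straightforward to state in code. A minimal checker (a helper of our own, not the paper's algorithm): a subset S is an s-plex iff every vertex of S has at least |S| - s neighbours inside S.

    ```python
    def is_splex(adj, subset, s):
        """True iff every vertex of `subset` has edges to all but at most
        s vertices of `subset` (v itself counts among the missed vertices,
        so cliques are exactly the 1-plexes)."""
        sub = set(subset)
        return all(len(sub & adj[v]) >= len(sub) - s for v in sub)

    # A 4-cycle is a 2-plex but not a clique:
    adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
    print(is_splex(adj, {1, 2, 3, 4}, 1), is_splex(adj, {1, 2, 3, 4}, 2))
    # False True
    ```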

    A linear vertex kernel for maximum internal spanning tree

    We present a polynomial-time algorithm that, for any graph G and integer k ≥ 0, either finds a spanning tree with at least k internal vertices, or outputs a new graph GR on at most 3k vertices and an integer k′ such that G has a spanning tree with at least k internal vertices if and only if GR has a spanning tree with at least k′ internal vertices. In other words, we show that the Maximum Internal Spanning Tree problem, parameterized by the number of internal vertices k, has a 3k-vertex kernel. Our result is based on an innovative application of a classical min-max result about hypertrees in hypergraphs, which states that a hypergraph H contains a hypertree if and only if H is partition-connected.
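
    To make the objective concrete: the internal vertices of a spanning tree are simply its vertices of tree-degree at least two (a small helper of our own, not part of the kernelization):

    ```python
    from collections import Counter

    def internal_vertices(tree_edges):
        """Vertices of degree >= 2 in the tree, i.e., its non-leaf vertices."""
        deg = Counter(v for e in tree_edges for v in e)
        return {v for v, d in deg.items() if d >= 2}

    # A spanning tree of a 5-cycle that is a path has 3 internal vertices:
    print(internal_vertices([(1, 2), (2, 3), (3, 4), (4, 5)]))  # {2, 3, 4}
    ```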